Thanks to their increasing affordability, portability, and 360° field of view, omnidirectional 360° images have found many promising and exciting applications in computer vision, robotics, and other fields. The most common format for storing, processing, and visualizing 360° images is equirectangular projection (ERP). However, the distortion introduced by the nonlinear mapping from the 360° image to the ERP image remains a barrier that keeps ERP images from being used as easily as conventional perspective images. This is especially relevant when estimating 360° optical flow, as the distortion needs to be handled appropriately. In this paper, we propose a 360° optical flow method based on tangent images. Our method leverages the gnomonic projection to locally convert ERP images to perspective images, and incrementally refines the estimated 360° flow field by uniformly sampling the ERP image via projection onto a cubemap and the vertices of a regular icosahedron. Our experiments demonstrate the benefits of the proposed method, both quantitatively and qualitatively.
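The projection step this abstract describes is concrete enough to sketch. Below is a minimal, illustrative implementation (not the authors' code) of resampling an ERP image into a perspective tangent image via the inverse gnomonic projection; the function name, the nearest-neighbour sampling, and the field-of-view parameter are assumptions for illustration. In the coarse-to-fine scheme described, such tangent images would be sampled at the cubemap face centres and then at the vertices of a regular icosahedron.

```python
import numpy as np

def erp_to_tangent(erp, lat0, lon0, fov=np.pi / 2, size=256):
    """Resample an equirectangular (ERP) image into a perspective tangent
    image centred at (lat0, lon0) via the inverse gnomonic projection.

    erp: H x W x C array, lon in [-pi, pi), lat in [-pi/2, pi/2].
    """
    H, W = erp.shape[:2]
    # Tangent-plane coordinates covering the requested field of view.
    half = np.tan(fov / 2)
    xs = np.linspace(-half, half, size)
    x, y = np.meshgrid(xs, -xs)  # flip y so +y points up in the image

    # Inverse gnomonic projection: tangent plane (x, y) -> sphere (lat, lon).
    rho = np.sqrt(x**2 + y**2)
    c = np.arctan(rho)
    cos_c, sin_c = np.cos(c), np.sin(c)
    rho = np.where(rho == 0, 1e-12, rho)  # avoid division by zero at the centre
    lat = np.arcsin(cos_c * np.sin(lat0) + y * sin_c * np.cos(lat0) / rho)
    lon = lon0 + np.arctan2(
        x * sin_c, rho * np.cos(lat0) * cos_c - y * np.sin(lat0) * sin_c
    )

    # Sphere -> ERP pixel coordinates, nearest-neighbour sampling.
    u = ((lon / (2 * np.pi) + 0.5) % 1.0) * (W - 1)
    v = (0.5 - lat / np.pi) * (H - 1)
    return erp[np.round(v).astype(int), np.round(u).astype(int)]
```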
360° cameras can capture a complete environment in a single shot, which makes 360° imagery attractive for many computer vision tasks. However, monocular depth estimation remains a challenge for 360° data, particularly at high resolutions like 2K (2048×1024) that are important for novel-view synthesis and virtual reality applications. Current CNN-based methods do not support such high resolutions due to limited GPU memory. In this work, we propose a flexible framework for monocular depth estimation of high-resolution 360° images using tangent images. We project the 360° input image onto a set of tangent planes to produce perspective views, which are suitable for the latest, most accurate state-of-the-art perspective monocular depth estimators. We recombine the individual depth estimates using deformable multi-scale alignment, and then improve the consistency of the disparity estimates through gradient-domain blending. The result is a dense, high-resolution 360° depth map with a high level of detail, which also works for outdoor scenes that existing methods do not support.
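Monocular depth estimators recover disparity only up to an unknown scale and shift, so merging the per-tangent-image predictions requires first bringing them into a common frame. The sketch below shows a simplified global affine (scale + shift) alignment over the overlap of two views; it is a hedged stand-in for the paper's deformable multi-scale alignment, and all names are illustrative.

```python
import numpy as np

def align_disparity(ref, src, overlap):
    """Fit scale a and shift b so that a * src + b matches ref over the
    overlap region (least squares), then apply them to the whole map."""
    d_ref = ref[overlap]
    d_src = src[overlap]
    A = np.stack([d_src, np.ones_like(d_src)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, d_ref, rcond=None)
    return a * src + b

# Toy usage: src differs from ref by scale 2 and shift 0.1.
ref = np.random.rand(64, 64)
src = 2.0 * ref + 0.1
overlap = np.zeros((64, 64), dtype=bool)
overlap[:, 32:] = True  # pretend the views share the right half
aligned = align_disparity(ref, src, overlap)
print(np.abs(aligned - ref).max())  # ~0 after alignment
```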
Neural networks can represent and accurately reconstruct radiance fields of static 3D scenes (e.g., NeRF). Several works have extended these capabilities to dynamic scenes captured with monocular video, with promising performance. However, the monocular setting is known to be an under-constrained problem, so these methods rely on data-driven priors to reconstruct dynamic content. We replace these priors with measurements from a time-of-flight (ToF) camera, and introduce a neural representation based on the image formation model of continuous-wave ToF cameras. Instead of using processed depth maps, we model the raw ToF sensor measurements to improve reconstruction quality and avoid issues with low-reflectance regions, multi-path interference, and the sensor's limited unambiguous depth range. We show that this approach improves the robustness of dynamic scene reconstruction to erroneous calibration and large motions, and discuss the benefits and limitations of the RGB+ToF sensors that are now available on modern smartphones.
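The continuous-wave ToF image formation model the abstract builds on can be sketched for the single-bounce case: each raw "bucket" is a correlation of the received signal against a phase-shifted reference. The minimal simulation and 4-bucket phase decoding below are illustrative assumptions, not the authors' code; real sensors add noise, multi-path, and lens effects, which is exactly what modelling raw measurements helps to handle.

```python
import numpy as np

C_LIGHT = 3e8  # speed of light, m/s

def tof_raw_measurements(depth, amplitude, offset, freq=30e6):
    """Simulate 4-bucket raw correlation measurements of a continuous-wave
    ToF camera for a single-bounce scene (minimal sketch, no noise)."""
    phase = 4 * np.pi * freq * depth / C_LIGHT  # round-trip phase shift
    psis = np.array([0, np.pi / 2, np.pi, 3 * np.pi / 2])
    # One correlation image per reference phase offset psi.
    return np.stack([amplitude * np.cos(phase + psi) + offset for psi in psis])

def decode_depth(buckets, freq=30e6):
    """Recover wrapped depth from the 4 raw buckets."""
    c0, c1, c2, c3 = buckets
    phase = np.arctan2(c3 - c1, c0 - c2) % (2 * np.pi)
    # Unambiguous depth range is c / (2 * freq) = 5 m at 30 MHz.
    return C_LIGHT * phase / (4 * np.pi * freq)

buckets = tof_raw_measurements(depth=np.array([2.0]), amplitude=1.0, offset=0.5)
print(decode_depth(buckets))  # -> [2.0]
```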
Figure 1. HoloGAN learns to separate pose from identity (shape and appearance) only from unlabelled 2D images, without sacrificing the visual fidelity of the generated images. All results shown here are sampled from HoloGAN for the same identities in each row but in different poses.
Fig. 1. Unlike current face reenactment approaches that only modify the expression of a target actor in a video, our novel deep video portrait approach enables full control over the target by transferring the rigid head pose, facial expression, and eye motion with a high level of photorealism. We present a novel approach that enables photo-realistic re-animation of portrait videos using only an input video. In contrast to existing approaches that are restricted to manipulations of facial expressions only, we are the first to transfer the full 3D head position, head rotation, face expression, eye gaze, and eye blinking from a source actor to a portrait video of a target actor. The core of our approach is a generative neural network with a novel space-time architecture. The network takes as input synthetic renderings of a parametric face model, based on which it predicts photo-realistic video frames for a given target actor. The realism in this rendering-to-video transfer is achieved by careful adversarial training, and as a result, we can create modified target videos that mimic the behavior of the synthetically-created input. In order to enable source-to-target video re-animation, we render a synthetic target video with the reconstructed head animation parameters from a source video, and feed it into the trained network, thus taking full control of the target.
Robotic teleoperation is a key technology for a wide variety of applications. It allows sending robots instead of humans to remote, possibly dangerous locations while still using the human brain with its enormous knowledge and creativity, especially for solving unexpected problems. A main challenge in teleoperation consists of providing enough feedback to the human operator for situation awareness, thus creating full immersion, as well as offering the operator suitable control interfaces to achieve efficient and robust task fulfillment. We present a bimanual telemanipulation system consisting of an anthropomorphic avatar robot and an operator station providing force and haptic feedback to the human operator. The avatar arms are controlled in Cartesian space with a direct mapping of the operator movements. The measured forces and torques on the avatar side are haptically displayed to the operator. We developed a predictive avatar model for limit avoidance which runs on the operator side, ensuring low latency. The system was successfully evaluated during the ANA Avatar XPRIZE competition semifinals. In addition, we performed in-lab experiments and carried out a small user study with mostly untrained operators.
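The abstract does not spell out the predictive limit-avoidance model, so the sketch below is only a simplified, operator-side stand-in: a direct Cartesian mapping of operator motion, with motion damped as the commanded target approaches a box-shaped workspace limit. All names, the box workspace, and the margin parameter are illustrative assumptions.

```python
import numpy as np

def avatar_command(op_pos, op_home, avatar_home, ws_min, ws_max, margin=0.05):
    """Map operator hand motion 1:1 to an avatar Cartesian target and
    compute a damping factor that slows motion near workspace limits."""
    target = avatar_home + (op_pos - op_home)  # direct Cartesian mapping
    target = np.clip(target, ws_min, ws_max)   # never command outside limits
    # Damping: 1 well inside the workspace, -> 0 within `margin` of a limit,
    # so the operator feels the avatar slow down before a limit is reached.
    dist = np.minimum(target - ws_min, ws_max - target)
    damping = float(np.clip(dist / margin, 0.0, 1.0).min())
    return target, damping

target, damping = avatar_command(
    op_pos=np.array([0.3, 0.1, 0.2]), op_home=np.zeros(3),
    avatar_home=np.array([0.5, 0.0, 0.8]),
    ws_min=np.array([0.2, -0.4, 0.5]), ws_max=np.array([0.9, 0.4, 1.1]))
```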
The purpose of this work was to tackle practical issues which arise when using a tendon-driven robotic manipulator with a long, passive, flexible proximal section in medical applications. A separable robot which overcomes difficulties in actuation and sterilization is introduced, in which the body containing the electronics is reusable and the remainder is disposable. A control input which resolves the redundancy in the kinematics and a physical interpretation of this redundancy are provided. The effect of a static change in the proximal section angle on bending angle error was explored under four testing conditions for a sinusoidal input. Bending angle error increased for increasing proximal section angle for all testing conditions, with an average error reduction of 41.48% for re-tension, 4.28% for hysteresis, and 52.35% for re-tension + hysteresis compensation relative to the baseline case. Two major sources of error in tracking the bending angle were identified: time delay from hysteresis and DC offset from the proximal section angle. Examination of these error sources revealed that the simple hysteresis compensation was most effective for removing time delay and re-tension compensation for removing DC offset, which was the primary source of increasing error. The re-tension compensation was also tested for dynamic changes in the proximal section and reduced error in the final configuration of the tip by 89.14% relative to the baseline case.
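The two error sources and their compensations suggest a simple command-shaping sketch, shown below under strong assumptions: the time delay is cancelled by a fixed phase lead, and the DC offset by a term linear in the proximal section angle. The gains are illustrative, not values from the paper, and the linear offset model is an assumption.

```python
import numpy as np

def compensated_command(theta_des, t, proximal_angle, lead=0.08, k_ret=0.5):
    """Shape a periodic bending command theta_des(t):
    - hysteresis compensation: advance the command in time to cancel the
      observed time delay;
    - re-tension compensation: subtract an offset proportional to the
      proximal section angle to cancel the DC error it induces."""
    return theta_des(t + lead) - k_ret * proximal_angle

# Toy usage: 0.2 Hz sinusoidal bending command, 30 deg proximal angle.
theta_des = lambda t: 40.0 * np.sin(2 * np.pi * 0.2 * t)
print(compensated_command(theta_des, t=1.0, proximal_angle=30.0))
```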
Learning enabled autonomous systems provide increased capabilities compared to traditional systems. However, the complexity and probabilistic nature of the underlying methods enabling such capabilities present challenges for current systems engineering processes for assurance and for test, evaluation, verification, and validation (TEVV). This paper provides a preliminary attempt to map recently developed technical approaches from the literature on assurance and TEVV of learning enabled autonomous systems (LEAS) to a traditional systems engineering V-model. This mapping categorizes such techniques into three main approaches: development, acquisition, and sustainment. We review the latest techniques to develop safe, reliable, and resilient learning enabled autonomous systems, without recommending radical and impractical changes to existing systems engineering processes. By performing this mapping, we seek to assist acquisition professionals by (i) informing comprehensive test and evaluation planning, and (ii) objectively communicating risk to leaders.
In inverse reinforcement learning (IRL), a learning agent infers a reward function encoding the underlying task using demonstrations from experts. However, many existing IRL techniques make the often unrealistic assumption that the agent has access to full information about the environment. We remove this assumption by developing an algorithm for IRL in partially observable Markov decision processes (POMDPs). We address two limitations of existing IRL techniques. First, they require an excessive amount of data due to the information asymmetry between the expert and the learner. Second, most of these IRL techniques require solving the computationally intractable forward problem -- computing an optimal policy given a reward function -- in POMDPs. The developed algorithm reduces the information asymmetry while increasing the data efficiency by incorporating task specifications expressed in temporal logic into IRL. Such specifications may be interpreted as side information available to the learner a priori in addition to the demonstrations. Further, the algorithm avoids a common source of algorithmic complexity by building on causal entropy as the measure of the likelihood of the demonstrations as opposed to entropy. Nevertheless, the resulting problem is nonconvex due to the so-called forward problem. We solve the intrinsic nonconvexity of the forward problem in a scalable manner through a sequential linear programming scheme that is guaranteed to converge to a locally optimal policy. In a series of examples, including experiments in a high-fidelity Unity simulator, we demonstrate that even with a limited amount of data and POMDPs with tens of thousands of states, our algorithm learns reward functions and policies that satisfy the task while inducing similar behavior to the expert by leveraging the provided side information.
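The causal-entropy building block can be sketched in the simpler, fully observable setting (the paper itself works in POMDPs and handles the forward problem with sequential linear programming). Below, soft value iteration yields the maximum-causal-entropy policy for a given reward, and the IRL gradient on linear reward weights is the gap between expert and learner feature expectations. This is a hedged simplification, not the paper's algorithm.

```python
import numpy as np

def soft_value_iteration(P, r, gamma=0.95, iters=300):
    """Maximum-causal-entropy ("soft") value iteration in an MDP.
    P: (A, S, S) transitions, r: (S,) reward. Returns pi[s, a]."""
    A, S, _ = P.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = r[None, :] + gamma * (P @ V)                     # Q[a, s]
        Qmax = Q.max(axis=0)
        V = Qmax + np.log(np.exp(Q - Qmax).sum(axis=0))      # soft max over actions
    return np.exp(Q - V[None, :]).T                          # pi(a|s), shape (S, A)

def feature_expectations(pi, P, rho0, features, gamma=0.95, horizon=200):
    """Discounted feature expectations of pi; with r = features @ w, the
    IRL gradient on w is mu_expert minus this quantity."""
    d, mu = rho0.copy(), np.zeros(features.shape[1])
    for t in range(horizon):
        mu += (gamma ** t) * (d @ features)
        d = np.einsum('s,sa,asn->n', d, pi, P)  # propagate state distribution
    return mu
```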
Speech-driven 3D facial animation has been widely explored, with applications in gaming, character animation, virtual reality, and telepresence systems. State-of-the-art methods deform the face topology of the target actor to sync with the input audio without considering the identity-specific speaking style and facial idiosyncrasies of the target actor, thus resulting in unrealistic and inaccurate lip movements. To address this, we present Imitator, a speech-driven facial expression synthesis method, which learns identity-specific details from a short input video and produces novel facial expressions matching the identity-specific speaking style and facial idiosyncrasies of the target actor. Specifically, we train a style-agnostic transformer on a large facial expression dataset which we use as a prior for audio-driven facial expressions. Based on this prior, we optimize for identity-specific speaking style based on a short reference video. To train the prior, we introduce a novel loss function based on detected bilabial consonants to ensure plausible lip closures and consequently improve the realism of the generated expressions. Through detailed experiments and a user study, we show that our approach produces temporally coherent facial expressions from input audio while preserving the speaking style of the target actors.
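The bilabial-consonant loss idea is simple to illustrate: on frames aligned to /m/, /b/, /p/, the lips must touch, so any predicted lip opening is penalised. The sketch below assumes a frame-level phoneme alignment and a precomputed lip-gap signal are available; names and shapes are illustrative, not the authors' implementation.

```python
import torch

def bilabial_closure_loss(lip_gap, bilabial_mask):
    """Penalise lip opening on frames aligned to bilabial consonants.

    lip_gap: (T,) predicted vertical distance between inner upper/lower lip.
    bilabial_mask: (T,) bool, True where the aligned phoneme is /m/, /b/, /p/.
    """
    if bilabial_mask.any():
        return lip_gap[bilabial_mask].mean()
    return lip_gap.new_zeros(())  # no bilabial frames: zero loss

gap = torch.tensor([0.0, 0.3, 0.5, 0.1])
mask = torch.tensor([True, False, False, True])
print(bilabial_closure_loss(gap, mask))  # tensor(0.0500)
```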